Burn wound classification model using spatial frequency-domain imaging and machine learning.
Accurate assessment of burn severity is critical for wound care and the course of treatment. Delays in classification translate to delays in burn management, increasing the risk of scarring and infection. To this end, numerous imaging techniques have been used to examine tissue properties to infer burn severity. Spatial frequency-domain imaging (SFDI) has also been used to characterize burns based on the relationships between histologic observations and changes in tissue properties. Recently, machine learning has been used to classify burns by combining optical features from multispectral or hyperspectral imaging. Rather than employ models of light propagation to deduce tissue optical properties, we investigated the feasibility of using SFDI reflectance data at multiple spatial frequencies, with a support vector machine (SVM) classifier, to predict severity in a porcine model of graded burns. Calibrated reflectance images were collected using SFDI at eight wavelengths (471 to 851 nm) and five spatial frequencies (0 to 0.2 mm⁻¹). Three models were built from subsets of this initial dataset. The first subset included data taken at all wavelengths with the planar (0 mm⁻¹) spatial frequency, the second comprised data at all wavelengths and spatial frequencies, and the third used all collected data at values relative to unburned tissue. These data subsets were used to train and test cubic SVM models, and compared against burn status 28 days after injury. Model accuracy was established through leave-one-out cross-validation testing. The model based on images obtained at all wavelengths and spatial frequencies predicted burn severity at 24 h with 92.5% accuracy. The model composed of all values relative to unburned skin was 94.4% accurate. By comparison, the model that employed only planar illumination was 88.8% accurate. This investigation suggests that the combination of SFDI with machine learning has potential for accurately predicting burn severity.
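The evaluation pipeline described in this abstract can be sketched as follows: a cubic (degree-3 polynomial kernel) SVM scored with leave-one-out cross-validation. The feature matrix below is a synthetic stand-in; real inputs would be calibrated SFDI reflectance values, one feature per wavelength/spatial-frequency pair, with burn-status labels assessed 28 days after injury.

```python
# Minimal sketch, assuming synthetic placeholder data in place of the
# paper's calibrated SFDI reflectance measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_sites = 40
n_features = 8 * 5                        # 8 wavelengths x 5 spatial frequencies
X = rng.random((n_sites, n_features))     # placeholder reflectance features
y = rng.integers(0, 2, n_sites)           # placeholder severity labels

clf = SVC(kernel="poly", degree=3)        # "cubic SVM" = degree-3 polynomial kernel
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
accuracy = scores.mean()                  # fraction of held-out sites classified correctly
```

With leave-one-out cross-validation, each of the `n_sites` samples is held out once, so `scores` has one 0/1 entry per burn site and its mean is the reported accuracy.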
Basic emotions and adaptation. A computational and evolutionary model
The core principles of evolutionary theories of emotion hold that affective states are crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many studies have used autonomous artificial agents to simulate emotional responses and the ways these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. We present a model of the evolution of affective behaviors using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors under different experimental conditions. In particular, we focus on the emotion of fear: the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick up. The simulated task is based on classical conditioning, and the agents must learn a strategy to recognize whether the environment is safe or represents a threat to their lives and select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best-ranked individuals, comparing their performances and analyzing the unit activations over each individual's life cycle.
We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with potentially dangerous environments by collecting information about the environment and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also show that an internal time-perception unit is decisive for the robots to achieve the highest performance and survivability across all conditions.
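The evolutionary setup these two abstracts rely on, a genetic algorithm selecting and mutating neural-network weights across generations, can be sketched minimally. The single-layer network and the toy fitness function below are illustrative placeholders, not the iCub-inspired simulation or the fear-conditioning task of the papers.

```python
# Minimal sketch, assuming a toy fitness function in place of the papers'
# simulated survival task. Elitist selection + Gaussian mutation.
import numpy as np

rng = np.random.default_rng(1)
POP, GENS, N_IN, N_OUT = 20, 30, 4, 2

def forward(weights, x):
    """One-layer network: map sensor inputs to action scores."""
    return np.tanh(x @ weights)

def fitness(weights):
    """Toy stand-in: reward choosing the 'safe' action (index 0)."""
    x = rng.random((16, N_IN))
    actions = forward(weights, x).argmax(axis=1)
    return (actions == 0).mean()

population = [rng.normal(size=(N_IN, N_OUT)) for _ in range(POP)]
for gen in range(GENS):
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: POP // 4]                               # selection
    population = [w + rng.normal(scale=0.1, size=w.shape)    # mutation
                  for w in elite
                  for _ in range(POP // len(elite))]
best = max(population, key=fitness)
```

In the papers, the genome instead encodes the weights of (possibly recurrent) networks driving a simulated robot, and fitness is measured over the agent's life cycle in safe versus dangerous environments.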
A computational model of the evolution of antipredator behavior in situations with temporal variation of danger using simulated robots
The threat-sensitive predator avoidance hypothesis states that prey are able to assess the level of danger in the environment by using direct and indirect predator cues. The existence of a neural system underlying this ability has been studied in many animal species, such as minnows, mosquitoes, and wood frogs. What is still under debate is the role of evolution and learning in the emergence of this assessment system. We propose a bio-inspired computational model of how risk management can arise as a result of both factors, and demonstrate its impact on fitness in simulated robotic agents equipped with recurrent neural networks and evolved with a genetic algorithm. The agents are trained and tested in environments with different levels of danger, and their performances are analyzed and compared.
Fostering Awareness and Personalization of Learning Artificial Intelligence
This paper illustrates the activities of the SMAILE and AILEAP projects, which are devoted to fostering awareness of, and readiness to learn, artificial intelligence in the general population. The first project was mainly oriented toward children and young adults, while the second focuses more on personalizing the learning experience, including for professionals.
Computational Approaches to Explainable Artificial Intelligence: Advances in Theory, Applications and Trends
Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances from the last few years in Artificial Intelligence (AI) and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed and discussed. In this way, we summarize the state-of-the-art in AI methods, models and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.
Spatial Frames of Reference and Action: A Study with Evolved Neuro-agents
Solving spatial tasks is crucial for adaptation and is made possible by the representation of space. The exact nature of this representation, which can rely on egocentric or allocentric frames of reference, is still debated. In this paper, a modelling approach is proposed to complement research on humans and animal models. Artificial agents, simulated mobile robots controlled by an artificial neural network, are evolved through evolutionary strategies to solve a spatial task that consists in locating the central area between two landmarks in a rectangular enclosure. This is a non-trivial task that requires the agent to identify the landmarks' locations, the spatial relation between the landmarks, and the landmarks' position relative to the environment. Different populations of agents with different spatial frames of reference are compared. Results indicate that both egocentric and allocentric frames of reference are effective, but allocentric frames give an advantage and lead to better performance.
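The distinction between the two frames of reference compared above can be made concrete with a small coordinate transform: the same target (the midpoint between two landmarks) expressed allocentrically in world coordinates, and egocentrically after rotating and translating it into the agent's body frame. The positions and heading below are illustrative values, not data from the paper's simulation.

```python
# Minimal sketch, assuming illustrative landmark and agent poses.
import math

def to_egocentric(point, agent_pos, agent_heading):
    """Rotate/translate a world-frame (allocentric) point into the
    agent's body frame (x = forward, y = left)."""
    dx = point[0] - agent_pos[0]
    dy = point[1] - agent_pos[1]
    c, s = math.cos(-agent_heading), math.sin(-agent_heading)
    return (c * dx - s * dy, s * dx + c * dy)

# Two landmarks in a rectangular enclosure; the target is their midpoint.
lm_a, lm_b = (1.0, 2.0), (3.0, 2.0)
target_allo = ((lm_a[0] + lm_b[0]) / 2, (lm_a[1] + lm_b[1]) / 2)
target_ego = to_egocentric(target_allo,
                           agent_pos=(0.0, 0.0),
                           agent_heading=math.pi / 2)  # agent facing +y
```

An allocentric controller receives `target_allo`, which is invariant as the agent moves; an egocentric controller receives `target_ego`, which changes with every movement and rotation of the agent.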
Analysing E-BTT data: the E-TAN ANALYST prototype
Here we present the first prototype of the E-TAN ANALYST, an app developed in Python to analyse spatio-temporal data. The app was built to analyse data from the Enhanced Baking Tray Task (E-BTT), an innovative task that exploits Tangible User Interfaces (TUIs) and returns a series of sixteen ordered coordinates as output. The E-TAN ANALYST allows the automatic computation of a series of indexes on these coordinates and the graphical representation of E-BTT performance. This elaboration can then be used by the clinician or the experimenter to analyse a single subject's performance and spatial exploration.
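The kind of index computed on the sixteen ordered coordinates can be sketched as below. The two indexes shown (scan-path length and a left/right balance of placements) are plausible examples of this class of measure, not necessarily the ones implemented in the E-TAN ANALYST prototype; the coordinates are placeholder values.

```python
# Minimal sketch, assuming placeholder E-BTT output and illustrative indexes.
import math

coords = [(float(x), 1.0 + 0.1 * x) for x in range(16)]  # placeholder output

def path_length(points):
    """Total Euclidean length of the scan path across consecutive placements."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def left_right_balance(points, midline_x):
    """Fraction of placements left of a vertical midline
    (an imbalance may indicate lateralized exploration)."""
    left = sum(1 for x, _ in points if x < midline_x)
    return left / len(points)

total = path_length(coords)
balance = left_right_balance(coords, midline_x=8.0)
```

Because the coordinates are ordered in time, such indexes capture not only where a subject placed items but the trajectory of their exploration, which is what makes the E-BTT richer than its paper-and-pencil ancestor.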
E-TAN platform and E-baking tray task potentialities: New ways to solve old problems
Spatial abilities allow humans to perceive and act in the world around them. By combining technology with a widely used neuropsychological test, the E-Baking Tray Task has proved to be very versatile and useful. Here we examine its properties and potential, proposing new challenges in visuospatial cognition. First, we review current algorithms for data analysis and propose new ones. Then we propose several new variables related to spatial exploration measured with this new device that could be inspected: verticality, stress, emotions, explored areas in peri-personal space, and so on.